
    Emotion sensing from head motion capture

    Get PDF
    Computational analysis of emotion from verbal and non-verbal behavioral cues is critical for human-centric intelligent systems. Among the non-verbal cues, head motion has received relatively little attention, although its importance has been noted in several studies. We propose a new approach for emotion recognition using head motion captured with Motion Capture (MoCap). Our approach is motivated by the well-known kinesics-phonetic analogy, which advocates that, analogous to human speech being composed of phonemes, head motion is composed of kinemes, i.e., elementary motion units. We discover a set of kinemes from head motion in an unsupervised manner by projecting it onto a learned basis domain and subsequently clustering; this transforms any head motion into a sequence of kinemes. Next, we learn the temporal latent structures within the kineme sequence pertaining to each emotion. For this purpose, we explore two separate approaches: one using a Hidden Markov Model and another using an artificial neural network. This class-specific, kineme-based representation of head motion is used to perform emotion recognition on the popular IEMOCAP database. We achieve high recognition accuracy (61.8% for three classes) for various emotion recognition tasks using head motion alone. This work adds to our understanding of head motion dynamics, and has applications in emotion analysis and in head motion animation and synthesis.
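    A minimal sketch of the kineme-discovery idea described in the abstract: head-motion windows are projected onto a learned basis and clustered, so that any head motion becomes a discrete kineme sequence. The choice of PCA as the basis, k-means as the clustering step, and all window sizes are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def to_windows(euler_angles, win=30, hop=15):
    """Slice a (T, 3) pitch/yaw/roll time series into overlapping windows."""
    T = len(euler_angles)
    return np.array([euler_angles[s:s + win].ravel()
                     for s in range(0, T - win + 1, hop)])

def learn_kinemes(all_windows, n_basis=16, n_kinemes=12):
    """Learn a basis (here PCA, as an assumption) and cluster the
    projected windows into kinemes (elementary motion units)."""
    basis = PCA(n_components=n_basis).fit(all_windows)
    clusters = KMeans(n_clusters=n_kinemes, n_init=10).fit(basis.transform(all_windows))
    return basis, clusters

def encode(euler_angles, basis, clusters, win=30, hop=15):
    """Map a head-motion recording to its kineme index sequence."""
    return clusters.predict(basis.transform(to_windows(euler_angles, win, hop)))

# Hypothetical usage: 10 synthetic recordings of (pitch, yaw, roll) angles.
recordings = [np.random.randn(300, 3) for _ in range(10)]
stacked = np.vstack([to_windows(r) for r in recordings])
basis, clusters = learn_kinemes(stacked)
kineme_seq = encode(recordings[0], basis, clusters)  # e.g. [3, 3, 7, 1, ...]
```

    The resulting kineme sequences could then feed a sequence model (HMM or neural network, per the abstract) trained per emotion class.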

    On using gait to enhance frontal face extraction

    No full text
    Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. In surveillance environments, it is necessary to handle pose variation of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory, with super-resolution analysis. We use region- and distance-based refinement of head pose estimation. We develop a direct mapping to relate the 2-D image with a 3-D model. In gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process, allowing for deployment in surveillance scenarios.
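    A loose sketch of one way the looming effect could be modeled along a gait trajectory: the head's apparent size grows roughly inversely with its distance to the camera, and a trajectory-based size prediction can stabilize the face crop before fusing frames. The linear approach model, the blending weight, and all parameter names (`traj`, `z0`, `speed`, etc.) are assumptions for illustration, not the paper's method.

```python
import numpy as np
import cv2

def expected_scale(frame_idx, traj):
    """Looming model: apparent head height (pixels) ~ focal * head_size / z,
    with the camera distance z assumed to shrink linearly as the subject walks."""
    z = traj["z0"] - traj["speed"] * frame_idx
    return traj["focal"] * traj["head_size"] / z

def fuse_frontal_face(frames, head_boxes, traj, out_size=(64, 64)):
    """Rescale each head crop to a canonical size using the gait-based
    looming prediction, then average the registered crops."""
    acc = np.zeros((out_size[1], out_size[0], 3), dtype=np.float64)
    for i, (frame, (x, y, w, h)) in enumerate(zip(frames, head_boxes)):
        # Blend the detected box height toward the trajectory prediction,
        # which suppresses per-frame detection jitter.
        h_use = int(round(0.5 * h + 0.5 * expected_scale(i, traj)))
        acc += cv2.resize(frame[y:y + h_use, x:x + w], out_size).astype(np.float64)
    return (acc / len(frames)).astype(np.uint8)

# Hypothetical usage on synthetic frames of a subject approaching the camera.
traj = {"z0": 6.0, "speed": 0.05, "focal": 800.0, "head_size": 0.25}
frames = [np.full((240, 320, 3), 128, dtype=np.uint8) for _ in range(8)]
boxes = [(140, 40, 40, int(expected_scale(i, traj))) for i in range(8)]
face = fuse_frontal_face(frames, boxes, traj)
```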

    On the role of head motion in affective expression

    Get PDF
    Non-verbal behavioral cues, such as head movement, play a significant role in human communication and affective expression. Although facial expressions and gestures have been extensively studied in the context of emotion understanding, head motion (which accompanies both) is relatively less understood. This paper studies the significance of head movement in adults' affective communication using videos from movies. These videos are taken from the Acted Facial Expressions in the Wild (AFEW) database and are labeled with seven basic emotion categories: anger, disgust, fear, joy, neutral, sadness, and surprise. Considering the human head as a rigid body, we estimate the head pose at each video frame in terms of the three Euler angles and obtain a time-series representation of head motion. First, we investigate the importance of the energy of angular head motion dynamics (displacement, velocity and acceleration) in discriminating among emotions. Next, we analyze the temporal variation of head motion by fitting an autoregressive model to the head motion time series. We observe that head motion carries sufficient information to distinguish any emotion from the rest with high accuracy, and that this information is complementary to that of facial expression, as it helps improve emotion recognition accuracy.
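    A sketch of the two descriptor families the abstract mentions: the energy of angular displacement/velocity/acceleration, and autoregressive (AR) coefficients fitted to the Euler-angle time series. The AR order and the use of statsmodels' `AutoReg` are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def motion_energy(euler_angles):
    """Mean squared magnitude of displacement, velocity and acceleration
    for a (T, 3) series of (pitch, yaw, roll) angles."""
    disp = euler_angles - euler_angles.mean(axis=0)
    vel = np.diff(euler_angles, axis=0)
    acc = np.diff(euler_angles, n=2, axis=0)
    return np.array([np.mean(disp**2), np.mean(vel**2), np.mean(acc**2)])

def ar_features(euler_angles, order=4):
    """Fit an AR(order) model per angle and concatenate the coefficients,
    capturing the temporal variation of head motion."""
    feats = []
    for k in range(euler_angles.shape[1]):
        model = AutoReg(euler_angles[:, k], lags=order).fit()
        feats.extend(model.params[1:])  # drop the intercept term
    return np.array(feats)

# Hypothetical usage on a synthetic head-pose track.
series = np.cumsum(np.random.randn(200, 3), axis=0)
feature_vector = np.concatenate([motion_energy(series), ar_features(series)])
```

    Feature vectors of this kind could then be fed to any standard classifier to test how well head motion alone separates the seven emotion categories.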

    Real-Time Head Gesture Recognition on Head-Mounted Displays using Cascaded Hidden Markov Models

    Full text link
    Head gestures are a natural means of face-to-face communication between people, but the recognition of head gestures in the context of virtual reality, and the use of head gestures as an interface for interacting with virtual avatars and virtual environments, have rarely been investigated. In the current study, we present an approach for real-time head gesture recognition on head-mounted displays using Cascaded Hidden Markov Models. We conducted two experiments to evaluate the proposed approach. In experiment 1, we trained the Cascaded Hidden Markov Models and assessed the offline classification performance using collected head motion data. In experiment 2, we characterized the real-time performance of the approach by estimating the latency to recognize a head gesture from recorded real-time classification data. Our results show that the proposed approach is effective in recognizing head gestures. The method can be integrated into a virtual reality system as a head gesture interface for interacting with virtual worlds.
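    A minimal sketch of one possible cascaded-HMM arrangement: a first-stage HMM separates "gesture" from "idle" head motion, and a second stage of per-gesture HMMs labels the detected segment by maximum log-likelihood. This two-stage reading of "cascaded" is an assumption; the paper's exact topology may differ. The sketch uses the hmmlearn package and synthetic data in place of real head-motion recordings.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_stage(sequences, n_states=3):
    """Fit one GaussianHMM on a list of (T_i, D) observation sequences."""
    X = np.vstack(sequences)
    return GaussianHMM(n_components=n_states, covariance_type="diag",
                       n_iter=50).fit(X, [len(s) for s in sequences])

# Synthetic stand-ins for recorded head-motion windows (all hypothetical).
rng = np.random.default_rng(0)
idle_examples = [rng.normal(0, 0.05, size=(40, 3)) for _ in range(20)]
gesture_examples = [rng.normal(0, 0.5, size=(40, 3)) for _ in range(20)]
gesture_classes = {
    "nod":   [rng.normal([0.5, 0, 0], 0.3, size=(40, 3)) for _ in range(10)],
    "shake": [rng.normal([0, 0.5, 0], 0.3, size=(40, 3)) for _ in range(10)],
}

# Stage 1: gesture-vs-idle detector; Stage 2: one HMM per gesture class.
idle_hmm = train_stage(idle_examples)
gesture_hmm = train_stage(gesture_examples)
class_hmms = {name: train_stage(seqs) for name, seqs in gesture_classes.items()}

def classify(window):
    """Return a gesture label for a (T, D) head-motion window, or None."""
    if gesture_hmm.score(window) <= idle_hmm.score(window):
        return None  # stage 1 rejects idle motion
    return max(class_hmms, key=lambda n: class_hmms[n].score(window))

print(classify(gesture_classes["nod"][0]))  # -> 'nod' (on this synthetic data)
```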

    Facial Expression Recognition in the Presence of Head Motion

    Get PDF

    A time series feature of variability to detect two types of boredom from motion capture of the head and shoulders

    Get PDF
    Boredom and disengagement metrics are crucial to the correctly timed implementation of adaptive interventions in interactive systems. Psychological research suggests that boredom (which other HCI teams have been able to partially quantify with pressure-sensing chair mats) is actually a composite: lethargy and restlessness. Here we present an innovative approach to the measurement and recognition of these two kinds of boredom, based on motion capture and video analysis of changes in head and shoulder positions. Discrete, three-minute, computer-presented stimuli (games, quizzes, films and music) covering a spectrum from engaging to boring/disengaging were used to elicit changes in cognitive/emotional states in seated, healthy volunteers. Interaction with the stimuli occurred via a handheld trackball instead of a mouse, so movements were assumed to be non-instrumental. Our results include a feature (the standard deviation of windowed ranges) that may be more specific to boredom than the mean speed of head movement, and that could be implemented in computer vision algorithms for disengagement detection.
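    The "standard deviation of windowed ranges" feature is simple to state in code: slide a window over a head-position trace, take the range (max minus min) of each window, then take the standard deviation of those per-window ranges. The window length and hop below are illustrative assumptions.

```python
import numpy as np

def std_of_windowed_ranges(position, win=60, hop=30):
    """position: (T,) head-position signal. Returns the std of per-window
    ranges; large values suggest intermittent restless bursts, small values
    suggest uniformly lethargic or uniformly steady movement."""
    ranges = [position[s:s + win].max() - position[s:s + win].min()
              for s in range(0, len(position) - win + 1, hop)]
    return float(np.std(ranges))

# Intuition check: a bursty trace scores higher than steady oscillation,
# even when the two have similar mean movement speed.
t = np.arange(1800)
steady = 0.5 * np.sin(t / 10.0)
restless = np.where((t // 300) % 2 == 0, 0.0, 1.0) * np.sin(t / 10.0)
print(std_of_windowed_ranges(steady), std_of_windowed_ranges(restless))
```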

    Wheelchair control by head motion

    Get PDF
    Electric wheelchairs are designed to aid paraplegics. Unfortunately, they cannot be used by persons with a higher degree of impairment, such as quadriplegics, i.e., persons who, due to age or illness, cannot move any body part except the head. Medical devices designed to help them are complicated, rare and expensive. In this paper, a microcontroller system that enables control of a standard electric wheelchair by head motion is presented. The system comprises electronic and mechanical components. A novel head motion recognition technique based on accelerometer data processing is designed. The wheelchair joystick is controlled by the system's mechanical actuator. The system can be used with several different types of standard electric wheelchairs. It is tested and verified through an experiment described in this paper.
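    A minimal sketch of accelerometer-based head-motion commands of the kind described: at rest, the accelerometer reads gravity, so tilt angles follow from its axis components, and tilt beyond a dead zone maps to joystick directions. The thresholds, axis conventions, and smoothing constant are assumptions for illustration, not the paper's calibrated values.

```python
import math

DEAD_ZONE_DEG = 15.0  # ignore small involuntary head movements (assumed)
ALPHA = 0.2           # exponential smoothing constant for noisy readings

def tilt_angles(ax, ay, az):
    """Pitch/roll (degrees) from a 3-axis accelerometer at rest (gravity only)."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

def command(pitch, roll):
    """Map head tilt to a wheelchair joystick command."""
    if abs(pitch) >= abs(roll):
        if pitch > DEAD_ZONE_DEG:
            return "FORWARD"
        if pitch < -DEAD_ZONE_DEG:
            return "BACKWARD"
    else:
        if roll > DEAD_ZONE_DEG:
            return "RIGHT"
        if roll < -DEAD_ZONE_DEG:
            return "LEFT"
    return "STOP"

# Smooth before thresholding, as a microcontroller loop might do.
state = (0.0, 0.0)
def update(ax, ay, az):
    global state
    p, r = tilt_angles(ax, ay, az)
    state = (ALPHA * p + (1 - ALPHA) * state[0],
             ALPHA * r + (1 - ALPHA) * state[1])
    return command(*state)

print(update(0.0, 0.0, 1.0))  # head level -> "STOP"
```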

    Accelerometer based gesture recognition robot

    Get PDF
    Gesture recognition is the process by which gestures made by a user are recognized by a receiver. Gestures are expressive, meaningful body motions involving physical movements of the fingers, hands, arms, head, face, or body, made with the intent of conveying meaningful information or interacting with the environment. They constitute one interesting small subspace of possible human motion. A gesture may also be perceived by the environment as a compression technique for information to be transmitted elsewhere and subsequently reconstructed by the receiver. Gestures can be classified into: hand and arm gestures (recognition of hand poses, sign languages, and entertainment applications); head and face gestures (nodding or shaking of the head, direction of eye gaze, etc.); and body gestures (full-body motion, as in tracking the movements of two people interacting outdoors, or analyzing the movements of a dancer to generate matching music and graphics). The benefit is that gestures can provide a human-computer interface: replacing the mouse and keyboard, pointing, navigating in a virtual environment, picking up and manipulating virtual objects, and interacting with the 3D world.
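    One common way to realize accelerometer-based gesture recognition of the kind this abstract describes: cut the 3-axis signal into windows, extract simple statistics, and classify with a nearest-neighbour model. The features, the classifier, and the synthetic gesture data are generic illustrations, not the paper's pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(window):
    """window: (T, 3) accelerometer samples -> per-axis mean, std and range."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           window.max(axis=0) - window.min(axis=0)])

rng = np.random.default_rng(1)
def fake_gesture(bias):
    """Stand-in for a recorded accelerometer window (hypothetical data)."""
    return rng.normal(bias, 0.3, size=(50, 3))

# Hypothetical training set: 30 labelled windows per gesture class.
X = [window_features(fake_gesture(b))
     for b in ([1, 0, 0], [0, 1, 0]) for _ in range(30)]
y = ["tilt_right"] * 30 + ["tilt_forward"] * 30
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([window_features(fake_gesture([1, 0, 0]))]))  # ['tilt_right']
```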